Introduction

My e‑portfolio tells the story of my journey through my latest module, Machine Learning. Each unit is broken down using Gibbs’ Reflective Cycle (The University of Edinburgh, 2024), and I finish with a big‑picture reflection that also borrows the “What? So what? Now what?” steps from Rolfe et al. (2001). Walland and Shaw (2020) remind us that an e‑portfolio is both process and product, so you will see my messy thinking as well as my polished results.

Reflection Approach: Gibbs’ Reflective Cycle

Why Gibbs’ Reflective Cycle? Because it is well rounded and has successfully guided me through several modules now. The cycle involves six key steps:

  • Description: of the learning experience throughout each unit in the module.
  • Feelings: my emotional response to first viewing each unit and then to feedback on the completed projects.
  • Evaluation: of the entire unit, the good and the bad.
  • Analysis: of the unit in hindsight.
  • Conclusion: what I have learned and what I would have done differently.
  • Action Plan: how I will approach the next unit or module.

    Gibbs’ Reflective Cycle keeps me honest: I describe, feel, evaluate, analyse, conclude, action‑plan – no corners cut throughout each unit.

    Reflection on Each Unit

    Unit 1 – Introduction to Machine Learning

  • Description: Compared ML with stats models in the Lecturecast; wrote a discussion post on the 4th Industrial Revolution and the Hungarian GP IT outage.
  • Feelings: Finally, a module directly aligned with my professional interests.
  • Evaluation: Used the discussion post as a reason to talk about F1 – a great excuse to finally look deeper into what happened at the 2024 Hungarian GP.
  • Analysis: Machine learning (ML) lives inside bigger, human‑centred systems. When the GP timing screens died, nobody cared about fancy models – they cared about fallback plans.
  • Conclusion: Tech should augment human judgment, not try to replace it. Echoes Industry 5.0 thinking.
  • Action Plan: Started backing up all my code and organised my coworkers’ access to the code behind critical reporting and dashboard applications.
  • Collaboration discussion: Units 1–3

    Unit 2 – Exploratory Data Analysis

  • Description: Revised EDA steps and received a peer review on my collaboration discussion.
  • Feelings: Confident – in the last module we worked extensively on EDA and I received a distinction.
  • Evaluation: Peer feedback reminded me to emphasise action plans and fallbacks for when issues like the 2024 Hungarian GP outage happen.
  • Analysis: Transparent EDA underpins fallback planning and the “what if it does not work?” question.
  • Conclusion: Negatives can strengthen the positives. Don’t ignore them.
  • Action Plan: Focus more on potential fallbacks if assumptions turn out to be wrong during the EDA process (see the sketch after this list).
  • Collaboration discussion: Units 1–3
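A minimal sketch of the kind of EDA sanity checks I mean, using a tiny made-up DataFrame (the column names are illustrative, not the unit’s actual dataset):

import pandas as pd

# Tiny illustrative dataset standing in for whatever is being analysed
df = pd.DataFrame({
    "price": [120, 85, None, 300, 95, 95],
    "minimum_nights": [2, 1, 3, -1, 2, 2],      # -1 is an obviously impossible value
    "neighbourhood": ["Harlem", "Chelsea", "Harlem", "SoHo", None, "Chelsea"],
})

# Structure and summary statistics first
print(df.shape)
print(df.describe(include="all"))

# Missing values per column - the first place a fallback plan is needed
print(df.isna().sum().sort_values(ascending=False))

# Duplicates and impossible values - the "what if it does not work" checks
print("duplicate rows:", df.duplicated().sum())
print("negative minimum_nights:", (df["minimum_nights"] < 0).sum())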

    Unit 3 – Correlation & Regression

  • Description: Played with covariance, Pearson r, linear & polynomial fits; wrote my summary post.
  • Feelings: At first a bit overwhelmed by the raw mathematics and theory, but once I saw the code it all clicked.
  • Evaluation: Used Python experiments to show variance.
  • Analysis: Correlation drives prediction, but without stress‑tests it’s brittle.
  • Conclusion: Correlation and regression are great for prediction, but in real-world systems, knowing how and when they fail is just as important.
  • Action Plan: Automate “corrupt‑a‑copy” tests (shuffle labels, inject noise) before signing off any model – see the sketch after this list.
  • Collaboration discussion: Units 1–3
  • Linear and polynomial regression: some example visuals from the Python work in the unit, predicting values using regression.
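A minimal sketch of what I mean by a “corrupt‑a‑copy” test, on made-up data: if the correlation survives label shuffling, something is wrong, and heavy noise shows how quickly the relationship degrades.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 200)
y = 3 * x + rng.normal(0, 2, 200)            # illustrative linear relationship

r, _ = pearsonr(x, y)
print("Pearson r on the real data:", round(r, 3))

# Corrupt a copy 1: shuffle the labels - the correlation should collapse towards zero
r_shuffled, _ = pearsonr(x, rng.permutation(y))
print("Pearson r after shuffling labels:", round(r_shuffled, 3))

# Corrupt a copy 2: inject noise into the feature and watch the fit degrade
r_noisy, _ = pearsonr(x + rng.normal(0, 5, 200), y)
print("Pearson r with noisy features:", round(r_noisy, 3))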

    Unit 4 – Linear Regression with Scikit‑Learn

  • Description: Refreshed univariate and multivariate regression in scikit‑learn; produced an engine-size vs emissions plot.
  • Feelings: Simple revision of the basics.
  • Evaluation: Pretty basic unit. Enjoyed working with new types of data though.
  • Analysis: The right plot matters – when viewing data visually, try different plot types.
  • Conclusion: Tools help, understanding wins.
  • Action Plan: Make sure to try multiple plots, such as a pairplot, before assuming there is no correlation.
  • Example of linear regression using scikit‑learn: a plot showing how emissions rise with engine size (see the sketch after this list).
  • Project highlight
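A minimal scikit-learn sketch of the engine-size vs emissions idea, on synthetic data rather than the unit’s actual dataset (the numbers are made up to show the workflow, not real emission figures):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in: emissions rise roughly linearly with engine size
engine_size = rng.uniform(1.0, 6.0, 200).reshape(-1, 1)
emissions = 60 * engine_size.ravel() + 120 + rng.normal(0, 15, 200)

X_train, X_test, y_train, y_test = train_test_split(
    engine_size, emissions, test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("R^2 on held-out data:", model.score(X_test, y_test))
# On a real dataset, a seaborn pairplot is worth trying before assuming there is no correlation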
     

    Unit 5 – Clustering

  • Description: Learnt about clustering techniques like K-means, hierarchical clustering and DBSCAN, and did exercises on new types of data using the Jaccard coefficient (see the K-means sketch after this list).
  • Feelings: Found the unfamiliar datasets engaging and intellectually stimulating – they stretched my pattern-spotting.
  • Evaluation: Good: picked the correct metrics. Bad: the data was not as clean as I thought it was.
  • Analysis: Clusters without domain context can mislead – “pretty blobs” aren’t insights.
  • Conclusion: Always understand the domain context first.
  • Action Plan: Chat to execs, managers and other stakeholders for domain context before clustering.
  • Visual representation of how K-means clustering works
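A minimal K-means sketch on synthetic blobs, roughly the shape of the unit’s exercises (the data is generated, not the unit’s dataset). Comparing silhouette scores across k is one way to avoid trusting “pretty blobs”:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic data standing in for the unit's dataset
X, _ = make_blobs(n_samples=300, centers=4, cluster_std=1.0, random_state=7)

# Compare a few values of k instead of eyeballing the clusters
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=7).fit_predict(X)
    print(f"k={k}  silhouette={silhouette_score(X, labels):.3f}")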

    Unit 6 – Clustering with Python (Team Project)

  • Description: Built the EDA and core code for an NYC Airbnb “quiet neighbourhood” recommender with an experienced teammate; the rest of the team wrote the report.
  • Feelings: I really enjoyed the programming and the collaboration. I was not as present on the reporting side of the project because I focused on the coding, and I missed a good opportunity to improve my reporting.
  • Evaluation: Proud of the work I handed in; a great team put it all together in the report.
  • Analysis: A great idea and solid code from my side, but I needed to make more time to work with the other half of the team on the reporting.
  • Conclusion: Take more opportunities to learn.
  • Action Plan: Keep in contact with the team and learn from those who wrote the distinction-worthy report.
  • Unit 6 Group Project

    Unit 7 – Intro to Artificial Neural Networks (ANN)

  • Description: Dug into perceptrons, AND/OR gates and the delta rule (see the sketch after this list).
  • Feelings: Theory‑heavy, itching to code.
  • Evaluation: I remember first learning how ANNs are loosely modelled on the human brain – it is still as fascinating to see it now.
  • Analysis: Understanding the logic behind how ANNs work helps explain the “black box” problem.
  • Conclusion: Remember to dive into the logic more often, no matter how theory-heavy. It is always worth it.
  • Action Plan: Get back to writing papers. Write a basic paper on ANNs after the module.
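A minimal sketch of a single perceptron learning the AND gate with the delta rule, which is roughly the logic the unit walked through in theory:

import numpy as np

# AND gate truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1                                     # learning rate

for epoch in range(20):
    for xi, ti in zip(X, t):
        y = 1 if xi @ w + b > 0 else 0       # step activation
        # Delta rule: nudge the weights in proportion to the error
        w += lr * (ti - y) * xi
        b += lr * (ti - y)

print("weights:", w, "bias:", b)
print("predictions:", [1 if xi @ w + b > 0 else 0 for xi in X])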

    Unit 8 – Training an ANN

  • Description: Python exercise on learning rate vs iterations.
  • Feelings: Relieved to code again; disappointed by my lack of participation in the second collaboration discussion forum due to time constraints this module.
  • Evaluation: Great little project on the effects of too many and too few iterations.
  • Analysis: Iteration counts = cost; law/ethics = reputational cost – both must be optimised.
  • Conclusion: Performance is multi‑objective.
  • Action Plan: Build a tiny “ethics validator”, which I will now include in future demos – an idea I found in one of my peers’ collaboration discussions.
  • Cost (error) vs iterations: a visual showing how too few iterations can miss the lowest point and leave high error, while too many iterations are a waste of time (a small sketch of this follows the list).
  • Project highlight
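A minimal gradient-descent sketch of the cost-vs-iterations trade-off from the exercise: too few iterations leave a high error, while piling on more past convergence just wastes time (the data and learning rate are made up):

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 4 * x + 1 + rng.normal(0, 0.1, 100)      # illustrative data: y is roughly 4x + 1

def final_mse(lr, iterations):
    w, b = 0.0, 0.0
    for _ in range(iterations):
        error = w * x + b - y
        # Gradient descent on mean squared error
        w -= lr * 2 * np.mean(error * x)
        b -= lr * 2 * np.mean(error)
    return np.mean((w * x + b - y) ** 2)

for iterations in (10, 100, 1000, 10000):
    print(iterations, "iterations -> MSE", round(final_mse(lr=0.1, iterations=iterations), 4))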
     

    Unit 9 – Intro to CNNs

  • Description: CIFAR‑10 baseline reaching 80%+ accuracy; explored the CNN Explainer visual (a minimal baseline sketch follows this list).
  • Feelings: Excited by the opportunity to revisit CNNs.
  • Evaluation: I don’t work with CNNs enough and would like to do more. There are many real-world use cases, such as at my vehicle-tracking company, where image recognition can assist with tracking stolen vehicles.
  • Analysis: I need to find more time to do this at work. I might write my Unit 11 project on CNNs and present it to the execs at work.
  • Conclusion: Build more CNNs and see how far you can take them.
  • Action Plan: Plan to build and write reports on CNNs.
  • My favourite visual on CNN methodology
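A minimal Keras sketch of a CIFAR-10 baseline of the kind the unit asked for. My own notebook used a different architecture, so treat this as an illustrative starting point rather than the model that hit 80%+:

from tensorflow import keras
from tensorflow.keras import layers

# CIFAR-10: 60,000 32x32 colour images across 10 classes
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))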

    Unit 10 – Natural Language Processing

  • Description: Theory sprint on transformers, BLEU/ROUGE; no code task.
  • Feelings: Was hoping for more here, as it is all the rage in AI right now.
  • Evaluation: Took good notes.
  • Analysis: Visual aids and a Python project could strengthen this unit.
  • Conclusion: Need self-set mini projects and research before moving on to the next module (see the sketch after this list).
  • Action Plan: Fine-tuned a small Ollama project for my company over the weekend.
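Since the unit had no code task, here is a minimal sketch of the kind of self-set mini project I mean – scoring a candidate sentence against a reference with BLEU using NLTK (the sentences are made up):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "race", "was", "stopped", "after", "the", "outage"]]
candidate = ["the", "race", "stopped", "after", "an", "outage"]

# Smoothing avoids zero scores when higher-order n-grams have no overlap
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print("BLEU:", round(score, 3))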

    Unit 11 – Model Selection & Evaluation (Presentation)

  • Description: Built an improved CNN for dash-cam objects and gave a 20-minute exec presentation (an evaluation sketch follows this list).
  • Feelings: Nerves → adrenaline → success. The project was a big one; presenting is not yet a natural skill and I need to work on it.
  • Evaluation: The slides were well built and the research was thorough, but I worry my nerves showed during the slideshow.
  • Analysis: Presenting to non‑tech execs = balancing act between rigour and clarity – my Achilles heel.
  • Conclusion: Reporting is still the gap (déjà vu from last module), but it is definitely better.
  • Action Plan: Show reports to my partner before going live. It seems to have worked this time – waiting for marks.
  • Unit 11 project transcript
  • Unit 11 project detailed slideshow
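A minimal sketch of the kind of evaluation that fed the presentation – a confusion matrix and classification report on held-out predictions. The labels and class names below are placeholders, not the dash-cam project’s real outputs:

import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

# Placeholder test labels and model predictions
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1, 0, 2])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 2, 0, 2])

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred,
                            target_names=["car", "pedestrian", "cyclist"]))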

    Unit 12 – Industry 4.0 & ML

  • Description: Deepfakes, digital twins, job prospects; explored DALL‑E images.
  • Feelings: Insightful, with some ethically concerning implications.
  • Evaluation: Good synthesis with Unit 1 discussions.
  • Analysis: Industry 5.0 will judge ML on trust as much as accuracy.
  • Conclusion: Stay curious, stay ethical.
  • Action Plan: Remember that anything you build can be used for all the wrong reasons.

    Reflection Piece

     

    What? Summary of journey

Twelve units later, I have taken another step forward in my algorithm skills, but just as importantly, I have seen where I still need to learn. The module felt familiar, as I have several years of experience in machine learning, yet it brought new ideas to the table that I have not dived deeply enough into during my career.

I am proud of what I delivered, from investigating and reporting on a famous IT outage to building my first practical CNN project for the company I work for. I have learnt a lot this module: each unit introduced me to insights into neural networks I had yet to discover, though I still need to dive deeper into large language models (LLMs) and transformers myself. I have more confidence with machine learning than before, and I now know the importance of taking ethics and communication into account.

    As Walland & Shaw (2020) insist, an e‑portfolio is about the process behind the artefacts. So, the real story sits in the invisible layers: late‑night debugging, bad ping, “A‑ha!” moments when domain context finally clicks, and that performance anxiety during live presentations.

    So what? Digging into growth, pain points, and patterns

Theme: Technical depth
  • Wins: CNN accuracy above 80%; wrote clean, modular code with no code issues throughout the module.
  • Learning opportunity: Limited exposure to LLMs and transformers.
  • Evidence / trigger: The Unit 9 notebook and Unit 11 project show the programming skill.

Theme: Ethical & resilience lens
  • Wins: Started a “red-team” mindset; added fallback design to dashboards.
  • Learning opportunity: Missed the Unit 8 collaboration discussion.
  • Evidence / trigger: The Unit 11 project shows growth in the ethical lens and red-team mindset.

Theme: Communication / storytelling
  • Wins: Reports clearer than last module.
  • Learning opportunity: Saw how far I still have to go when I had to present the slideshow to executives unfamiliar with machine learning.
  • Evidence / trigger: Feedback from the Unit 1 collaboration discussion; Unit 6 team project feedback when handing over ideas and the notebook.

Theme: Collaboration
  • Wins: Smooth and enjoyable pairing with the team during the Unit 6 team project.
  • Learning opportunity: After finishing the coding side I had the voluntary option to assist with the reporting side of the project, but could not make it due to work commitments and time constraints.
  • Evidence / trigger: Team feedback, and all of us now following each other on LinkedIn.

Theme: Time management
  • Wins: The team and I finished Unit 6 by the end of Unit 5 – ahead of schedule early doors.
  • Learning opportunity: A work crisis pushed university onto the back foot for a few weeks; I had to pull late nights to catch up.
  • Evidence / trigger: Unit 6 being handed in several days early by the team.

Theme: Reflection quality
  • Wins: Used Gibbs (The University of Edinburgh, 2024) consistently; action plans more in depth.
  • Learning opportunity: Storytelling still needs work.
  • Evidence / trigger: This e-portfolio for the machine learning module.

    Key Insights

  • 1. My default mental model is build first, narrate later. I will work on domain context first.
  • 2. Ethics and resilience need to be taken into account. It is important that I consider the likelihood of my project failing and who it may affect.
  • 3. Live presentations are my new bottleneck. My storytelling still needs work, but live presenting is what gives me the most nerves.

    Now What? Concrete roadmap

Goal: LLM and transformer experience
  • Method / tool: Create, using fake data, a simple LLM for my company.
  • Success metric: Get colleagues to chat to it.
  • Due date: 2025-08-31

Goal: Improve ethics & resilience
  • Method / tool: Implement ethics checks for all my team’s projects, as well as resilience checks.
  • Success metric: Get the team to do the same; discuss at next month’s meeting.
  • Due date: 2025-08-04

Goal: Presentation practice
  • Method / tool: Improve performance anxiety and presentation skill.
  • Success metric: Present to the team with my camera on during meetings.
  • Due date: 2025-08-04

Goal: Wordiness trim
  • Method / tool: Improve storytelling and reporting.
  • Success metric: Get colleagues and my partner to read over work projects and personal/university projects respectively; the only way to improve is to be open to criticism.
  • Due date: 2025-08-31

Goal: Time management
  • Method / tool: Improve planning and expect work fallbacks.
  • Success metric: Get ahead of the next module, but also set university time aside no matter the work commitments.
  • Due date: 2025-09-30

    Professional Skills Matrix & Evidence of Development

Skill / competency: Machine-learning modelling
  • How I developed it in the ML module: Implemented regression, clustering, ANN and CNN models; tuned hyper-parameters and compared models.
  • Concrete evidence / artefact: Unit 6 and Unit 11 projects.

Skill / competency: Resilience & ethics
  • How I developed it in the ML module: My EDA now includes resilience and ethics checks.
  • Concrete evidence / artefact: Unit 6 and Unit 11 projects.

Skill / competency: Data visualisation & storytelling
  • How I developed it in the ML module: Created exec-friendly slides; compared models visually; delivered a 20-minute talk to non-technical execs; peer-feedback dialogues.
  • Concrete evidence / artefact: Unit 1 collaboration discussion and the Unit 6 and Unit 11 projects.

Skill / competency: Collaboration & leadership
  • How I developed it in the ML module: Led a workable prototype in the Unit 6 project and worked alongside a team of fellow experts.
  • Concrete evidence / artefact: Unit 6 project.

    Conclusion

I started the module feeling like “this is my domain, machine learning is my day job.” Halfway in I realised I had not been emphasising resilience and ethics. By Unit 11, before presenting to execs, I felt the weight of translation: code alone won’t move the needle; stories do.

Am I satisfied? Mostly. My technical bar rose, my communication bar nudged upward and my ethics bar has begun to grow, but I still have a long way to go. Reflections are only as good as the actions they lead to, so I have added every action above to my to-do list and I will complete them.

    References

The University of Edinburgh (2024) Reflection Toolkit: Gibbs’ Reflective Cycle. Available from: https://reflection.ed.ac.uk/reflectors-toolkit/reflecting-on-experience/gibbs-reflective-cycle [Accessed 10 April 2025].

Rolfe, G., Freshwater, D. and Jasper, M. (2001) Critical Reflection for Nursing and the Helping Professions: A User's Guide. Available from the Internet Archive [Accessed 10 April 2025].

Walland, E. and Shaw, S. (2020) ‘E-portfolios in teaching, learning and assessment: tensions in theory and praxis’. Available from: https://www.tandfonline.com/doi/full/10.1080/1475939X.2022.2074087#abstract [Accessed 10 April 2025].